71 research outputs found

    Multi-Attribute Decision Making using Weighted Description Logics

    We introduce a framework based on Description Logics (DL) that encodes and solves decision problems by combining DL inference services with utility theory to represent the preferences of the agent. The novelty of the approach is that we treat ABoxes as alternatives and weighted concept and role assertions as preferences over possible outcomes. We discuss a relevant use case to show the benefits of the approach from a decision-theoretic point of view.
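The core idea of weighted assertions as preferences can be illustrated with a minimal sketch. All names and weights below are invented, and DL entailment is simplified to set membership; a real implementation would query a DL reasoner instead.

```python
# Illustrative sketch (not the paper's formalism): score each ABox
# (candidate outcome) by summing the weights of the preference
# assertions it contains. Set membership stands in for DL entailment.

# Weighted preferences: assertion -> weight (names are hypothetical)
preferences = {
    ("Hotel", "h1"): 3.0,               # concept assertion Hotel(h1)
    ("nearBeach", ("h1", "b1")): 2.0,   # role assertion nearBeach(h1, b1)
    ("Expensive", "h1"): -1.5,          # negative weight: dispreferred
}

# Alternatives: each ABox is a set of assertions
abox_a = {("Hotel", "h1"), ("nearBeach", ("h1", "b1"))}
abox_b = {("Hotel", "h1"), ("Expensive", "h1")}

def utility(abox):
    """Sum the weights of all preferences satisfied by this ABox."""
    return sum(w for assertion, w in preferences.items() if assertion in abox)

best = max([("A", abox_a), ("B", abox_b)], key=lambda p: utility(p[1]))
print(best[0], utility(best[1]))  # -> A 5.0
```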

    Reasoning with Contextual Knowledge and Influence Diagrams

    Influence diagrams (IDs) are well-known formalisms extending Bayesian networks to model decision situations under uncertainty. Although they are convenient as a decision-theoretic tool, their knowledge representation ability is limited when it comes to capturing other crucial notions such as logical consistency. We complement IDs with the lightweight description logic (DL) EL to overcome such limitations. We consider a setup where DL axioms hold in some contexts, yet the actual context is uncertain. The framework benefits from the convenience of DL as a domain knowledge representation language and from the modelling strength of IDs in dealing with decisions over contexts in the presence of contextual uncertainty. We define the related reasoning problems and study their computational complexity.
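The general flavor of deciding under contextual uncertainty can be sketched as follows. This is a toy illustration, not the paper's algorithm: contexts carry probabilities, the axioms active in a context may rule a decision out as inconsistent, and the remaining decisions are compared by expected utility. All names and numbers are made up.

```python
# Toy sketch: pick the decision maximizing expected utility over
# uncertain contexts, rejecting decisions that are logically
# inconsistent (None) in some context. Illustrative values only.

contexts = {"c1": 0.7, "c2": 0.3}          # P(context)

# utility[decision][context]; None marks a decision ruled out by the
# axioms that hold in that context.
utility = {
    "treat": {"c1": 10, "c2": -5},
    "wait":  {"c1": 2,  "c2": 4},
    "refer": {"c1": None, "c2": 8},        # inconsistent in context c1
}

def expected_utility(decision):
    us = utility[decision]
    if any(us[c] is None for c in contexts):   # reject inconsistent decisions
        return float("-inf")
    return sum(p * us[c] for c, p in contexts.items())

best = max(utility, key=expected_utility)
print(best, expected_utility(best))  # -> treat 5.5
```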

    Statistical Reconstruction Methods for 3D Imaging of Biological Samples with Electron Microscopy

    Electron microscopy has emerged as the leading method for the in vivo study of biological structures such as cells, organelles, protein molecules, and virus-like particles. By providing 3D images at up to near-atomic resolution, it plays a significant role in analyzing complex organizations, understanding physiological functions, and developing medicines. The 3D images, representing the electrostatic potential distribution, are reconstructed from 2D projection images of the target acquired by the electron microscope. There are two main 3D reconstruction techniques in electron microscopy: electron tomography (ET) and single particle reconstruction (SPR). In ET, the projection images are acquired by rotating the specimen to different angles. In SPR, the projection images are obtained by analyzing images of multiple objects representing the same structure. In both cases, tomographic reconstruction methods are then applied to obtain the 3D image from the 2D projections.

    Physical and mechanical limitations can prevent the acquisition of projection images that cover the projection angle space completely and uniformly. Incomplete and non-uniform sampling of the projection angles results in anisotropic resolution in the image plane and generates artifacts. Another problem is that the total applied electron dose is limited in order to prevent radiation damage to the biological target. Therefore, only a limited number of projection images with a low signal-to-noise ratio can be used in the reconstruction process, which significantly affects the resolution of the reconstructed image. This study presents statistical methods to overcome these major challenges and obtain precise, high-resolution images in electron microscopy.

    Statistical image reconstruction methods have been successful in recovering a signal from imperfect measurements owing to their capability of utilizing a priori information. First, we developed a sequential application of a statistical method for ET. We then extended the method to support projection angles freely distributed in 3D space and applied it in SPR. In both applications, we observed the strength of the method in filling projection gaps, its robustness against noise, and its ability to resolve high-resolution details in comparison with conventional reconstruction methods. Afterwards, we improved the computation time of the method by incorporating multiresolution reconstruction. Furthermore, we developed an adaptive regularization method to minimize the number of parameters that must be set by the user. We also proposed a local adaptive Wiener filter for the class averaging step of SPR to improve the averaging accuracy.

    Qualitative and quantitative analysis of reconstructions from phantom and experimental datasets demonstrated that the proposed methods outperform conventional reconstruction methods, providing better image accuracy and higher resolution than conventional algebraic and transform-domain-based methods. The methods provided in this study contribute to enhancing our understanding of cellular and molecular structures by providing 3D images of them with improved accuracy and resolution.
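The principle behind statistical (regularized) reconstruction can be shown on a toy problem. This is a minimal sketch under strong simplifying assumptions, not the thesis's method: a random matrix stands in for the projection operator, and a plain quadratic prior replaces the adaptive regularization discussed above.

```python
import numpy as np

# Minimal sketch of regularized statistical reconstruction: solve
#   min_x ||A x - b||^2 + lam * ||x||^2
# by gradient descent on a toy 1D "image". In ET/SPR, A would model
# projections at the acquired tilt angles and the prior would be
# more sophisticated; everything here is illustrative.

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))                   # toy projection model
x_true = np.array([1.0, 0.0, 2.0, -1.0])      # toy "specimen"
b = A @ x_true + 0.05 * rng.normal(size=6)    # noisy measurements

lam, step = 0.5, 0.01
x = np.zeros(4)
for _ in range(3000):
    grad = 2 * A.T @ (A @ x - b) + 2 * lam * x   # data term + prior term
    x -= step * grad

print(np.round(x, 2))   # close to x_true, slightly shrunk by the prior
```

The prior term is what lets such methods tolerate missing angles and low dose: it keeps the problem well-posed even when the data alone would not determine the image.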

    Group reasoning in social environments

    When modeling group decision making scenarios, a central authority is often assumed to be in charge of amalgamating the preferences of a given set of agents with the aim of computing a socially desirable outcome, for instance one maximizing the utilitarian or the egalitarian social welfare. Departing from this classical perspective, and inspired by the growing body of literature on opinion formation and diffusion, a setting for group decision making is studied where agents are self-interested and where each of them can adopt her own decision without central coordination, hence possibly disagreeing with the decisions taken by some of the other agents. In particular, it is assumed that agents belong to a social environment and that their preferences on the available alternatives can be influenced by the number of “neighbors” agreeing/disagreeing with them. The setting is formalized and studied by modeling agents’ reasoning capabilities in terms of weighted propositional logics and by focusing on Nash-stable solutions as the prototypical solution concept. In particular, a thorough computational complexity analysis is conducted on the problem of deciding the existence of such stable outcomes. Moreover, for the classes of environments where stability is always guaranteed, the convergence of Nash dynamics consisting of sequences of best-response updates is studied as well.
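The best-response dynamics studied above can be sketched in a few lines. The model below is illustrative, not the paper's exact formalism: each agent's utility is an invented intrinsic score plus a fixed bonus per agreeing neighbor, and agents update until no one wants to deviate.

```python
# Hedged sketch of best-response dynamics: agents on a graph pick one
# of two alternatives; utility = intrinsic preference + bonus per
# agreeing neighbor. All numbers are made up for illustration.

neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # a triangle of agents
intrinsic = {0: {"a": 2, "b": 0},               # agent 0 leans toward "a"
             1: {"a": 0, "b": 1},
             2: {"a": 0, "b": 1}}
PEER = 1.0                                      # weight per agreeing neighbor

def utility(agent, choice, profile):
    agree = sum(1 for n in neighbors[agent] if profile[n] == choice)
    return intrinsic[agent][choice] + PEER * agree

def best_response_dynamics(profile):
    """Run best-response updates until a Nash-stable profile is reached.
    Terminates only for environments where stability is guaranteed."""
    changed = True
    while changed:
        changed = False
        for agent in profile:
            best = max(("a", "b"), key=lambda c: utility(agent, c, profile))
            if best != profile[agent]:
                profile[agent] = best
                changed = True
    return profile

print(best_response_dynamics({0: "a", 1: "a", 2: "a"}))
```

As the abstract notes, such dynamics need not converge in general; the loop above is only safe on instances where a stable outcome is guaranteed to exist.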

    Satisfiability in Strategy Logic can be Easier than Model Checking

    In the design of complex systems, model checking and satisfiability arise as two prominent decision problems. While model checking requires the designed system to be provided in advance, satisfiability allows one to check whether such a system even exists. With very few exceptions, the second problem turns out to be harder than the first from a complexity-theoretic standpoint. In this paper, we investigate the connection between the two problems for a non-trivial fragment of Strategy Logic (SL, for short). SL extends LTL with first-order quantification over strategies, thus allowing one to reason explicitly about the strategic abilities of agents in a multi-agent system. Satisfiability for the full logic is known to be highly undecidable, while model checking is non-elementary. The SL fragment we consider is obtained by preventing strategic quantification within the scope of temporal operators. The resulting logic is quite powerful, still allowing one to express important game-theoretic properties of multi-agent systems, such as the existence of Nash and immune equilibria, as well as to formalize the rational synthesis problem. We show that satisfiability for this fragment is PSPACE-complete, while its model-checking complexity is 2EXPTIME-hard. The result is obtained by means of an elegant encoding of the problem into the satisfiability of conjunctive-binding first-order logic, a recently discovered decidable fragment of first-order logic.

    RDF validation requirements - evaluation and logical underpinning

    There are many case studies for which the formulation of RDF constraints and the validation of RDF data conforming to these constraints are very important. As part of our collaboration with the W3C and DCMI working groups on RDF validation, we identified major RDF validation requirements and initiated an RDF validation requirements database, open for contributions at http://purl.org/net/rdf-validation. The purpose of this database is to collaboratively collect case studies, use cases, requirements, and solutions regarding RDF validation. Although there are multiple constraint languages which can be used to formulate RDF constraints (associated with these requirements), there is no standard way to formulate them. This paper evaluates to what extent each requirement is satisfied by each of these constraint languages. We take reasoning into account as an important pre-validation step and therefore map constraints to DL in order to show that each constraint can be mapped to an ontology describing RDF constraints generically.

    Group decision making via probabilistic belief merging

    We propose a probabilistic-logical framework for group decision making. Its main characteristic is that we derive group preferences from agents’ beliefs and utilities rather than from their individual preferences, as is done in social choice approaches. This can be more appropriate when the individual preferences conceal too much of the underlying opinions that determined them. We introduce three preference relations and investigate the relationships between the group preferences and individual and subgroup preferences.
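The contrast with aggregating preference orders can be made concrete with a toy sketch. The merging operator below is a plain average of belief distributions, chosen purely for illustration; the paper's three preference relations are more refined.

```python
# Toy sketch: derive a group preference from agents' beliefs and
# utilities rather than from their individual preference orders.
# Merging by averaging is an illustrative stand-in only.

worlds = ["rain", "sun"]

# Each agent: probabilistic beliefs over worlds (hypothetical values)
beliefs = {"ann": {"rain": 0.8, "sun": 0.2},
           "bob": {"rain": 0.4, "sun": 0.6}}
# Shared utilities per (act, world)
utils = {"umbrella":    {"rain": 5, "sun": 1},
         "no_umbrella": {"rain": -5, "sun": 4}}

# Merge beliefs by averaging, then rank acts by expected utility.
merged = {w: sum(b[w] for b in beliefs.values()) / len(beliefs)
          for w in worlds}

def group_eu(act):
    return sum(merged[w] * utils[act][w] for w in worlds)

ranking = sorted(utils, key=group_eu, reverse=True)
print(ranking, [round(group_eu(a), 2) for a in ranking])
```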

    uDecide: A Protégé plugin for multi-attribute decision making

    This paper introduces the Protégé plugin uDecide, which makes it possible to solve multi-attribute decision making problems encoded in a straightforward extension of standard Description Logics. The formalism allows one to specify background knowledge in terms of an ontology, while each attribute is represented as a weighted class expression. On top of such an approach, one can compute the best choice (or the best k choices) while taking the background knowledge into account in the appropriate way. We show how to implement the approach on top of existing semantic web technologies and demonstrate its benefits with a use case that illustrates how uDecide can convert an existing web resource into an expert system.
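The weighted-class-expression scoring can be sketched as follows. All names are invented, and class membership is simplified to plain sets; the plugin itself evaluates DL class expressions over an ontology with a reasoner.

```python
# Minimal sketch of the weighted-attribute idea: each attribute is a
# (membership set, weight) pair standing in for a weighted class
# expression, and choices are ranked by their total weight.

attributes = [
    ({"laptop1", "laptop2"}, 2.0),   # e.g. "LightWeight", weight 2.0
    ({"laptop2", "laptop3"}, 1.5),   # e.g. "LongBattery", weight 1.5
    ({"laptop3"}, -1.0),             # e.g. "Refurbished", penalized
]

def score(choice):
    """Sum the weights of all attributes the choice satisfies."""
    return sum(w for members, w in attributes if choice in members)

def best_k(choices, k):
    """Return the k highest-scoring choices (the 'best k choices')."""
    return sorted(choices, key=score, reverse=True)[:k]

print(best_k({"laptop1", "laptop2", "laptop3"}, 2))  # -> ['laptop2', 'laptop1']
```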